Adversarial artificial intelligence
Adversarial Artificial Intelligence Is Real
A panel of artificial intelligence (AI) experts from industry discussed some of the technology's promise and perils and predicted its future during an AFCEA TechNet Cyber Conference panel April 26 in Baltimore. The panelists were all members of AFCEA's Emerging Leaders Committee who have achieved expertise in their given fields before the age of 40. The group discussed AI in the cyber realm. Asked about "anti-AI" or "counter-AI," Brian Behe, lead data scientist, CyberPoint International, reported a recent case in which his team used a method called reinforcement learning to change the signature of malware files without altering the malware's functionality. "We use this as a way to do some security testing on other machine learning classifiers that had been built to detect malware. Sure enough, we were able to beat those classifiers," Behe explained.
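The technique Behe describes is black-box evasion: apply transformations that preserve a binary's behaviour while shifting the features a detector sees, and let the attack loop learn which transformations lower the detection score. The sketch below is a deliberately simplified, greedy stand-in for that reinforcement-learning loop; the toy classifier, the action set and every name in it are hypothetical illustrations, not CyberPoint's tooling.

```python
import random

random.seed(7)

def classifier_score(binary: bytes) -> float:
    """Stand-in for a black-box ML malware detector. Scores the fraction
    of high bytes, purely so this sketch runs end to end; a real target
    would be a trained model queried through an API."""
    if not binary:
        return 0.0
    return sum(1 for byte in binary if byte > 0x7F) / len(binary)

# Hypothetical functionality-preserving transformations. Real attacks use
# format-aware actions (adding PE sections, renaming imports, appending
# overlay data) that leave the program's runtime behaviour untouched.
ACTIONS = [
    lambda b: b + b"\x00" * 256,                                            # null padding
    lambda b: b + bytes(random.randrange(0x20, 0x7F) for _ in range(256)),  # ASCII junk
]

def evade(binary: bytes, threshold: float = 0.3, budget: int = 50) -> bytes:
    """Greedy stand-in for the reinforcement-learning loop: try actions,
    keep any that lower the detector's score, stop below the threshold."""
    score = classifier_score(binary)
    for _ in range(budget):
        if score < threshold:
            break
        candidate = random.choice(ACTIONS)(binary)
        new_score = classifier_score(candidate)
        if new_score < score:  # the "reward": detection score went down
            binary, score = candidate, new_score
    return binary

sample = bytes(random.randrange(256) for _ in range(1024))  # fake "malware"
evaded = evade(sample)
print(classifier_score(sample), "->", classifier_score(evaded))
```

A real system would use a learned policy over format-aware transformations rather than random greedy search, but the reward structure (a lower malicious score is better) is the same.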
Adversarial artificial intelligence: winning the cyber security battle
Artificial intelligence (AI) has come a long way since its humble beginnings. Once thought to be a technology that would struggle to find its place in the real world, it is now all around us. It can influence the ads we see, the purchases we make and the television we watch. It's also fast becoming firmly embedded in our working lives, particularly in the world of cyber security. The Capgemini Research Institute recently found that one in five organisations used AI in cyber security before 2019, with almost two-thirds planning to implement it by 2020.
Defending Against Adversarial Artificial Intelligence
Today, machine learning (ML) is coming into its own, ready to serve mankind in a diverse array of applications – from highly efficient manufacturing, medicine and massive information analysis to self-driving transportation, and beyond. However, if misapplied, misused or subverted, ML holds the potential for great harm – this is the double-edged sword of machine learning.

"Over the last decade, researchers have focused on realizing practical ML capable of accomplishing real-world tasks and making them more efficient," said Dr. Hava Siegelmann, program manager in DARPA's Information Innovation Office (I2O). "But, in a very real way, we've rushed ahead, paying little attention to vulnerabilities inherent in ML platforms – particularly in terms of altering, corrupting or deceiving these systems."

In a commonly cited example, ML used by a self-driving car was tricked by visual alterations to a stop sign. While a human viewing the altered sign would have no difficulty interpreting its meaning, the ML erroneously interpreted the stop sign as a 45 mph speed limit posting. In a real-world attack like this, the self-driving car would accelerate through the stop sign, potentially causing a disastrous outcome. This is just one of many recently discovered attacks applicable to virtually any ML application.

To get ahead of this acute safety challenge, DARPA created the Guaranteeing AI Robustness against Deception (GARD) program. GARD aims to develop a new generation of defenses against adversarial deception attacks on ML models. Current defense efforts were designed to protect against specific, pre-defined adversarial attacks and remained vulnerable to attacks outside their design parameters when tested. GARD seeks to approach ML defense differently – by developing broad-based defenses that address the numerous possible attacks in a given scenario. "There is a critical need for ML defense as the technology is increasingly incorporated into some of our most critical infrastructure."
- Government > Military (1.00)
- Government > Regional Government > North America Government > United States Government (0.98)
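The stop-sign attack belongs to the family of adversarial examples: tiny, carefully chosen input perturbations that flip a model's prediction. The fast gradient sign method (FGSM) is the textbook instance, and the toy sketch below applies it to a small logistic-regression "classifier" purely for illustration; it is not GARD's work or the actual vision-model attack, and all names and values are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "image classifier": sigmoid(w @ x + b) is the probability
# that input x is a stop sign. Everything here is synthetic.
w = rng.normal(size=64)
b = 0.0

def predict(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A clean input the model confidently (and correctly) calls a stop sign.
x = 0.2 * w + 0.1 * rng.normal(size=64)
y = 1.0  # true label: stop sign

# FGSM: step in the sign of the input gradient of the loss, which
# for logistic regression is (p - y) * w, to increase the loss.
eps = 0.3
grad_x = (predict(x) - y) * w
x_adv = x + eps * np.sign(grad_x)

print(f"clean:     p(stop sign) = {predict(x):.4f}")
print(f"perturbed: p(stop sign) = {predict(x_adv):.4f}")  # drops sharply
```

Broad-based defenses of the kind GARD pursues have to hold up against this whole family of perturbations, not just the single gradient direction shown here.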